

Efficient Convex Completion of Coupled Tensors using Coupled Nuclear Norms

Neural Information Processing Systems

Coupled norms have emerged as a convex method for coupled tensor completion. A limitation of coupled norms is that they induce low-rankness only through the multilinear rank of coupled tensors. In this paper, we introduce a new set of coupled norms, known as coupled nuclear norms, by constraining the CP rank of coupled tensors. We propose new coupled completion models that use the coupled nuclear norms as regularizers and can be optimized with computationally efficient methods. We derive excess risk bounds for the proposed completion models, showing that the new norms lead to better performance. Through simulation and real-data experiments, we demonstrate that the proposed norms outperform existing coupled norms for coupled completion.
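To make the nuclear-norm-as-regularizer idea concrete, the sketch below shows the simplest matrix analogue of such completion models: iterative singular value thresholding (hard-impute style). This is an illustrative assumption on our part, not the paper's coupled-tensor method; the function names (`svt`, `complete`) and parameters (`tau`, `n_iter`) are hypothetical.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Shrink each singular value by tau, clipping at zero
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def complete(M_obs, mask, tau=0.5, n_iter=500):
    """Hard-impute completion: alternately shrink singular values and
    re-impose the observed entries (mask marks observed positions)."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(n_iter):
        X = svt(X, tau)
        X = np.where(mask, M_obs, X)
    return X

# Demo on a rank-1 matrix with two hidden entries
u = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
v = np.array([1.0, 1.0, 2.0, 2.0, 3.0])
M = np.outer(u, v)
mask = np.ones_like(M, dtype=bool)
mask[0, 0] = mask[3, 4] = False
M_hat = complete(np.where(mask, M, 0.0), mask)
```

The coupled norms of the paper generalize this kind of spectral regularization to several tensors sharing a mode, but the shrink-and-reimpose loop is the same basic mechanism.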



Neural Information Processing Systems

A major hurdle in this study is that implicit regularization in deep learning seems to kick in only with certain types of data (not with random data, for example), and we lack mathematical tools for reasoning about real-life data. Thus one needs a simple test-bed for the investigation, where the data admits a crisp mathematical formulation. Following earlier works, we focus on the problem of matrix completion: given a randomly chosen subset of entries from an unknown matrix W, the task is to recover the unseen entries. To cast this as a prediction problem, we may view each entry of W as a data point: observed entries constitute the training set, and the average reconstruction error over the unobserved entries is the test error, quantifying generalization. Fitting the observed entries is obviously an underdetermined problem with multiple solutions.
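The train/test split described above can be sketched in a few lines. This is a minimal illustration of the setup only (here with a deliberately naive mean-fill predictor of our own choosing, not the paper's method): observed entries are the training set, held-out entries define the test error.

```python
import numpy as np

rng = np.random.default_rng(0)
# Ground-truth low-rank matrix W = A @ B
A = rng.normal(size=(20, 3))
B = rng.normal(size=(3, 20))
W = A @ B

# Randomly observed entries form the training set; the rest are the test set
mask = rng.random(W.shape) < 0.5

# A trivially underdetermined solution: fit observed entries exactly,
# fill unseen entries with the observed mean
pred = np.full(W.shape, W[mask].mean())
pred[mask] = W[mask]

train_err = np.mean((pred[mask] - W[mask]) ** 2)    # zero by construction
test_err = np.mean((pred[~mask] - W[~mask]) ** 2)   # quantifies generalization
```

Any predictor that interpolates the training entries achieves zero training error; only the error on the unobserved entries distinguishes solutions, which is exactly why implicit regularization matters here.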



Decentralized sketching of low rank matrices

Rakshith Sharma Srinivasa, Kiryung Lee, Marius Junge, Justin Romberg

Neural Information Processing Systems

A fundamental structural model for data is that the data points lie close to an unknown subspace, meaning that the matrix created by concatenating the data vectors has low rank. We address a particular low-rank matrix recovery problem where we wish to recover a set of vectors from a low-dimensional subspace after they have been individually compressed (or "sketched").
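The measurement model can be sketched as follows. Note this is a simplified illustration under an assumption the paper does not make — that the subspace is known — in which case each sketched column reduces to a small least-squares problem; the paper's actual contribution is recovery when the subspace is unknown. All variable names (`U`, `A`, `X_hat`) are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, m, k = 50, 3, 10, 8   # ambient dim, subspace dim, sketch size, #vectors

# Data columns lie in a d-dimensional subspace: X = U @ C has rank d
U, _ = np.linalg.qr(rng.normal(size=(n, d)))
C = rng.normal(size=(d, k))
X = U @ C

# Each column is individually compressed by its own random sketch A_i (m < n)
A = [rng.normal(size=(m, n)) / np.sqrt(m) for _ in range(k)]
y = [A[i] @ X[:, i] for i in range(k)]

# If the subspace U were known, each column reduces to least squares in
# only d unknowns (m >= d measurements suffice):
X_hat = np.column_stack([
    U @ np.linalg.lstsq(A[i] @ U, y[i], rcond=None)[0] for i in range(k)
])
```

Since each `y[i]` lies in the range of `A[i] @ U`, the least-squares step recovers the coefficients exactly; the hard part addressed by the paper is doing this jointly when `U` must also be estimated from the sketches.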




A novel variational form of the Schatten-p quasi-norm

Neural Information Processing Systems

Clearly, due to nonconvexity, the task of finding a minimizer of (1) can be quite challenging and potentially limits what can be guaranteed theoretically about solving problems of the form (1).
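The nonconvexity in question is easy to exhibit numerically. The sketch below (the helper name `schatten_p` is ours) evaluates the Schatten-p quasi-norm from the singular values and shows that for p = 1/2 the value at the midpoint of two matrices exceeds the average of the endpoint values, violating convexity.

```python
import numpy as np

def schatten_p(X, p):
    """Schatten-p quasi-norm: (sum_i sigma_i^p)^(1/p).
    A norm for p >= 1; only a (nonconvex) quasi-norm for 0 < p < 1."""
    s = np.linalg.svd(X, compute_uv=False)
    return float((s ** p).sum() ** (1.0 / p))

# Convexity check for p = 1/2 on two rank-1 diagonal matrices
X, Y = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
mid = schatten_p(0.5 * (X + Y), 0.5)                    # = 2.0
avg = 0.5 * (schatten_p(X, 0.5) + schatten_p(Y, 0.5))   # = 1.0
# mid > avg, so the function is not convex for p < 1
```

For p = 1 this quantity is the nuclear norm and the inequality reverses, which is precisely why variational reformulations are needed to make the p < 1 case tractable.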